The Organization of the Future: Smaller Teams, Harder Constraints

The first post in this series argued that AI is moving engineering up the stack, away from implementation and toward systems, architecture, and governance. The second argued that this changes what engineers do: judgment replaces implementation as the primary constraint, and the best engineers become orchestrators rather than builders.

The third question—the one boards and executive teams most need to answer—is what this means for how engineering organizations are designed.

It’s not mainly a question of how many engineers to hire, or which AI tools to buy. It’s a structural question about how organizations produce, govern, and sustain software at all.

Smaller teams, larger scope

The industrial logic of large software teams was always a response to scarcity. Building production software was hard and slow. You needed specialists: engineers to write code, QA to test it, architects to design it, SREs to operate it, security teams to review it. Division of labor was the only way to move at any meaningful pace.

AI changes the underlying scarcity. When implementation, testing, and refactoring get significantly cheaper, the minimum viable team for shipping production software shrinks. A small group of engineers with strong judgment can do what previously required a much larger organization.

That doesn’t mean the enterprise software organization disappears. Large organizations have large systems, and large systems require coordination, governance, and sustained investment regardless of how code is generated. But the structure of those organizations will change. The trend is toward smaller, more autonomous teams, each owning a slice of the system end to end—from design through deployment to operation—working within shared infrastructure and shared constraints.

The best engineering organizations have been moving in this direction for years. What AI changes is the economics. Teams that once needed eight engineers to build, test, and operate a service may need three. The minimum viable unit gets smaller, and the argument for autonomous, full-ownership teams gets harder to ignore.

For organizational design, the implication is direct: headcount alone becomes a poor proxy for engineering capacity. What matters more is how well teams are structured, how clearly they own their systems, and how effectively the organization has invested in the shared infrastructure that makes small teams viable.

Quality is an organizational problem now

The previous post described three layers of quality: code quality, architecture quality, and service maturity. As AI automates more of the first layer, organizations have to invest deliberately in the second and third.

Service maturity—monitoring, ownership, security, documentation, rollout discipline, compliance—is where most organizations have the most ground to cover. It’s also the layer most likely to be exposed by AI-driven velocity. When teams can ship faster, the gaps in operational discipline surface faster. Services without clear owners, without runbooks, without alerting that anyone acts on—these accumulate quietly and create compounding risk.

Here’s what makes service maturity different from code quality: you can’t fix it one engineer at a time. It’s an organizational problem. It requires consistent standards, enforcement mechanisms, and the institutional will to treat operational readiness as a hard gate on what ships, not a box to be checked after the fact.

Organizations that treat service maturity as optional will find that AI-driven velocity amplifies their existing operational debt. Those that treat it as foundational will find that it becomes the condition that lets them move fast sustainably.

Quality moves up the stack—from individual code, to system architecture, to organizational discipline.

Constraints become infrastructure

In the traditional model, quality was enforced by people. Senior engineers reviewed code. Architects approved designs. SREs gatekept production access. The expertise lived in individuals, and the organization relied on that expertise being in the right place at the right time.

That model doesn’t scale when development velocity increases. Manual oversight has fixed throughput. If AI can generate hundreds of changes per week, a review process that depends on experts touching everything becomes the bottleneck—or it gets bypassed entirely.

The alternative is to embed constraints into the system rather than the people.

Required monitoring before a service can be marked production-ready. Automated security scanning before deployment. Ownership records that must be kept current. Documentation standards enforced by tooling rather than convention.

These aren’t bureaucratic overhead. They’re infrastructure.
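To make the idea concrete, a readiness gate of this kind can be sketched as a check a deploy pipeline runs against a service's metadata. Everything here is illustrative: the manifest fields and requirement names are hypothetical, not a real schema, but the shape is what "constraints embedded in the system" looks like in practice.

```python
from dataclasses import dataclass

# Hypothetical service manifest; field names are illustrative, not a real schema.
@dataclass
class ServiceManifest:
    name: str
    owner: str = ""                   # team accountable for the service
    runbook_url: str = ""             # link to the operational runbook
    alerting_enabled: bool = False    # alerts exist and route somewhere
    security_scan_passed: bool = False

def readiness_gate(manifest: ServiceManifest) -> list[str]:
    """Return the list of unmet requirements; empty means production-ready."""
    failures = []
    if not manifest.owner:
        failures.append("no owner of record")
    if not manifest.runbook_url:
        failures.append("missing runbook")
    if not manifest.alerting_enabled:
        failures.append("alerting not configured")
    if not manifest.security_scan_passed:
        failures.append("security scan not passed")
    return failures

# A deploy pipeline would block on a non-empty result rather than rely on
# a human reviewer remembering to ask these questions.
svc = ServiceManifest(
    name="billing",
    owner="payments-team",
    runbook_url="https://wiki.example/runbooks/billing",
    alerting_enabled=True,
    security_scan_passed=True,
)
assert readiness_gate(svc) == []
```

The point of expressing the gate in code is that its throughput scales with the number of changes, not with the number of senior engineers available to review them.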

Organizations that understand this will invest in constraint infrastructure: service catalogs, maturity models, internal developer platforms that track the operational readiness of every service—not as compliance theater, but as genuine leverage. A way to ensure that the velocity AI provides doesn’t create fragility faster than it creates value.
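A minimal sketch of what "tracking operational readiness" can mean in a service catalog: score each service against a set of maturity checks and surface the weakest ones. The check names, thresholds, and record fields are assumptions for illustration, not any real catalog's schema.

```python
# Hypothetical maturity model: each check is a named predicate over a service
# record; the maturity score is the fraction of checks that pass.
CHECKS = {
    "has_owner":     lambda svc: bool(svc.get("owner")),
    "has_runbook":   lambda svc: bool(svc.get("runbook")),
    "has_slo":       lambda svc: bool(svc.get("slo")),
    "alerts_routed": lambda svc: svc.get("alerts_routed", False),
    "docs_current":  lambda svc: svc.get("docs_current", False),
}

def maturity(svc: dict) -> float:
    """Fraction of maturity checks this service passes (0.0 to 1.0)."""
    passed = sum(1 for check in CHECKS.values() if check(svc))
    return passed / len(CHECKS)

def weakest_services(catalog: list[dict], threshold: float = 0.6) -> list[str]:
    """Surface services below the maturity threshold, worst first."""
    below = [(svc["name"], maturity(svc))
             for svc in catalog if maturity(svc) < threshold]
    return [name for name, _ in sorted(below, key=lambda pair: pair[1])]
```

Run nightly against the catalog, a report like this turns maturity from a spreadsheet exercise into a tracked signal a platform team can act on before an incident forces the issue.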

Organizations that don’t invest here will still build constraints, but reactively, after incidents, under pressure, in ways that create friction without creating clarity.

Constraints enable speed

The counterintuitive thing about this model is that more constraints, implemented well, produce faster organizations—not slower ones.

The friction in engineering organizations rarely comes from having too many rules. It comes from unclear, inconsistently enforced, or undocumented rules—the kind that require engineers to negotiate expectations over and over.

Clear constraints remove that ambiguity. When a team knows exactly what’s required to ship and operate a service—what monitoring must exist, what security review is required, what documentation must be current—they can move without constantly second-guessing whether they’re doing it right.

Think about what makes a good API fast to work with. A well-defined contract reduces the number of decisions you have to make. Organizational constraints work the same way. They shift energy from coordination and negotiation to actual building.

In an AI-driven environment, this dynamic gets more pronounced. AI can generate change at a rate that overwhelms coordination-dependent organizations. The ones that can absorb that velocity without losing coherence are the ones that have invested in explicit, automated, scalable constraints.

Platform engineering grows up

One consequence of this shift is that platform engineering expands in scope and importance.

The traditional platform engineering mandate was to make deployment, infrastructure, and observability easier. Build the pipelines, manage the cloud costs, run the Kubernetes clusters.

The new mandate is broader: manage service lifecycle quality across the organization.

That means not just providing deployment tooling, but defining and enforcing the standards that determine whether a service is ready to be deployed. Not just observability infrastructure, but owning the frameworks that define what to observe and how systems should fail. Platform engineering becomes central to the organization’s ability to scale safely, and the platform itself becomes something more than shared tooling—it’s the infrastructure for organizational quality.

Organizations that underinvest here will find their engineering organizations scaling in headcount without scaling in coherence.

The design challenge

The organizations that navigate this transition well will look different from those built around the assumptions of the last decade—smaller teams with broader ownership, constraints enforced by tooling rather than convention, platform engineering as lifecycle governance rather than just infrastructure provision, and investment in service maturity that actually keeps pace with development velocity.

None of this is straightforward. It requires organizational discipline, leadership alignment, and a willingness to invest in infrastructure that isn’t directly visible in the product. It requires treating engineering governance as a strategic capability.

The alternative is to absorb increased development velocity without adapting organizational structure. That has a predictable result: complexity accumulates faster, operational fragility grows, and failures become more expensive. Moving fast stops being an advantage when the system can’t hold what you’re building.

AI changes how software is built. It also changes how the organizations that build it need to be structured.